- It's still free!
- Video 1 walks you through onboarding to the course
- The first live session is next week!
- You can now get a certificate via the exam app
- We improved the written material with interactive quizzes
If you're studying MCP and want a live, interactive, visual, certified course, then join us on the hub!
hey hey @mradermacher - VB from Hugging Face here, we'd love to onboard you over to our optimised xet backend! 🔥
as you know, we're in the process of upgrading our storage backend to xet (which helps us scale and offer blazingly fast upload/download speeds too): https://huggingface.co/blog/xet-on-the-hub - and now that we're certain the backend can scale with even big models like Llama 4 / Qwen 3, we're moving to the next phase: inviting impactful orgs and users on the hub over. as you're a big part of the open source ML community, we would love to onboard you next and create some excitement about it in the community too!
in terms of actual steps - it should be as simple as one of the org admins joining hf.co/join/xet - we'll take care of the rest.
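nothing should change client-side once you're in - as a minimal sketch, assuming huggingface_hub with the hf_xet extra installed (the repo id and filename below are placeholders):

```python
# Once the org is enrolled, uploads go through xet automatically when the
# hf_xet extra is installed: pip install "huggingface_hub[hf_xet]"
from huggingface_hub import HfApi

api = HfApi()

# Placeholder repo id and filename for illustration; the call itself is
# the same upload_file you already use.
api.upload_file(
    path_or_fileobj="model-00001-of-00099.safetensors",
    path_in_repo="model-00001-of-00099.safetensors",
    repo_id="your-org/your-model",
    repo_type="model",
)
```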
We're thrilled to announce the launch of our comprehensive Model Context Protocol (MCP) Course! This free program is designed to take learners from foundational understanding to practical application of MCP in AI.
In this course, you will:
- Study Model Context Protocol in theory, design, and practice.
- Learn to use established MCP SDKs and frameworks.
- Share your projects and explore applications created by the community.
- Participate in challenges and evaluate your MCP implementations.
- Earn a certificate of completion.
At the end of this course, you'll understand how MCP works and how to build your own AI applications that leverage external data and tools using the latest MCP standards.
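To give a taste of where the course goes, here is a minimal sketch of an MCP server using the official Python SDK's FastMCP helper; the server name and the `add` tool are made-up examples, not course material:

```python
# Minimal MCP server exposing one tool, using the official Python SDK
# (pip install "mcp[cli]"). The server name and tool are illustrative only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    # Serve over stdio so an MCP client (e.g. an IDE or agent) can connect.
    mcp.run()
```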
Qwen 3 fine-tuning >> MoE. Updated the experiment thread to include the config and script for fine-tuning the Qwen3-30B-A3B model.
The goal is to make a low-latency, non-thinking model as a daily driver for coding, so 3 billion active parameters should be perfect.
- training running
- evals running
- improve dataset
The MoE isn't going to fit into Colab's A100 even with quantization (@UnslothAI). So I've been working on HF Spaces' H100s for this. Everything is available in the thread and I'll share more tomorrow.
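The real config and script are in the thread; as a rough sketch of the shape of the setup, here is what a LoRA SFT run on the MoE could look like with TRL and PEFT (the dataset and hyperparameters below are placeholders, not the experiment's actual values):

```python
# Rough sketch of a LoRA SFT run on the Qwen3 MoE; the real config and
# script live in the experiment thread. Dataset and hyperparameters here
# are placeholders.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

# LoRA on the attention projections keeps the trainable footprint small.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

training_args = SFTConfig(
    output_dir="qwen3-30b-a3b-coder",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    bf16=True,
)

trainer = SFTTrainer(
    model="Qwen/Qwen3-30B-A3B",
    args=training_args,
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()
```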
We're starting from the foundations of modern generative AI by looking at transformers. This chapter has been expanded in depth and features, and contains new material like:
- a FREE and CERTIFIED exam on the fundamentals of transformers
- deeper exploration of transformer architectures and attention mechanisms
- end-to-end exploration of inference strategies for prefill and decode steps (sketched below)
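As a taste of the inference material, here is a minimal sketch of the prefill/decode split with transformers: one forward pass over the whole prompt fills the KV cache (prefill), then generation feeds one token at a time and reuses that cache (decode). The model choice and greedy decoding are arbitrary simplifications:

```python
# Minimal prefill/decode split: prefill processes the prompt in one pass and
# caches keys/values; decode feeds only the newest token each step.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # small model, arbitrary choice
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("Transformers are", return_tensors="pt")

with torch.no_grad():
    # Prefill: one forward pass over the full prompt, caching keys/values.
    out = model(**inputs, use_cache=True)
    past = out.past_key_values
    next_token = out.logits[:, -1].argmax(dim=-1, keepdim=True)

    generated = [next_token]
    for _ in range(20):
        # Decode: feed only the newest token; attention reads the KV cache.
        out = model(input_ids=next_token, past_key_values=past, use_cache=True)
        past = out.past_key_values
        next_token = out.logits[:, -1].argmax(dim=-1, keepdim=True)
        generated.append(next_token)

print(tokenizer.decode(torch.cat(generated, dim=-1)[0]))
```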
The course has leveled up in complexity and depth, so this is a great time to join in if you want to build your own AI models.
Hacked my presentation building with Inference Providers, Cohere Command A, and sheer simplicity. Use this script if you're burning too much time on presentations:
This is what it does:
- uses Command A to generate slides and speaker notes based on some material
- renders the material in Remark's open format and imports all images, tables, etc.
- lets you review the slides as markdown and iterate
- exports to either PDF or PPTX using backslide
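The script is the source of truth; as a rough sketch of the generation step, here is what the Command A call could look like via Inference Providers with huggingface_hub (the model id, file names, and prompt below are assumptions):

```python
# Rough sketch of the slide-generation step via Inference Providers.
# The model id is assumed; check the Hub for the current Command A repo.
from huggingface_hub import InferenceClient

client = InferenceClient()

material = open("notes.md").read()  # hypothetical source material

response = client.chat_completion(
    model="CohereLabs/c4ai-command-a-03-2025",  # assumed model id
    messages=[
        {"role": "system", "content": "Write Remark-format markdown slides with speaker notes."},
        {"role": "user", "content": material},
    ],
)

# Save the markdown for review and iteration, then export with backslide.
slides_md = response.choices[0].message.content
open("slides.md", "w").write(slides_md)
```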
Next steps: add text-to-speech for the audio and generate a video. This should make Hugging Face educational content scale to a billion AI learners.